AirFed: A Federated Graph-Enhanced Multi-Agent Reinforcement Learning Framework for Multi-UAV Cooperative Mobile Edge Computing

Wang, Zhiyu, Raj, Suman, Buyya, Rajkumar

arXiv.org Artificial Intelligence

Cooperative Mobile Edge Computing (MEC) systems built on multiple Unmanned Aerial Vehicles (UAVs) face critical challenges in coordinating trajectory planning, task offloading, and resource allocation while ensuring Quality of Service (QoS) under dynamic and uncertain environments. Existing approaches suffer from limited scalability, slow convergence, and inefficient knowledge sharing among UAVs, particularly when handling large-scale IoT device deployments with stringent deadline constraints. This paper proposes AirFed, a novel federated graph-enhanced multi-agent reinforcement learning framework that addresses these challenges through three key innovations. First, we design dual-layer dynamic Graph Attention Networks (GATs) that explicitly model spatial-temporal dependencies among UAVs and IoT devices, capturing both service relationships and collaborative interactions within the network topology. Second, we develop a dual-Actor single-Critic architecture that jointly optimizes continuous trajectory control and discrete task offloading decisions. Third, we propose a reputation-based decentralized federated learning mechanism with gradient-sensitive adaptive quantization, enabling efficient and robust knowledge sharing across heterogeneous UAVs. Extensive experiments demonstrate that AirFed achieves a 42.9% reduction in weighted cost compared to state-of-the-art baselines, attains over 99% deadline satisfaction and a 94.2% IoT device coverage rate, and reduces communication overhead by 54.5%. Scalability analysis confirms robust performance across varying UAV numbers, IoT device densities, and system scales, validating AirFed's practical applicability for large-scale UAV-MEC deployments.
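The abstract names "gradient-sensitive adaptive quantization" as the communication-reduction mechanism but does not spell out its form. Below is an illustrative sketch only of one generic such scheme, where higher-magnitude (more sensitive) gradient tensors receive more quantization bits; the function names, thresholds, and bit-allocation rule are assumptions, not AirFed's actual method.

```python
import numpy as np

def quantize(grad, bits):
    """Uniform symmetric quantization of a gradient tensor to `bits` bits."""
    scale = np.abs(grad).max() or 1.0       # avoid division by zero for all-zero grads
    levels = 2 ** (bits - 1) - 1            # e.g. 127 for 8 bits
    q = np.round(grad / scale * levels).astype(np.int32)
    return q, scale, levels

def dequantize(q, scale, levels):
    return q.astype(np.float64) / levels * scale

def adaptive_quantize(grad, lo_bits=4, hi_bits=8, norm_threshold=1.0):
    """Spend more bits on gradients with larger norm (assumed sensitivity proxy)."""
    bits = hi_bits if np.linalg.norm(grad) > norm_threshold else lo_bits
    return quantize(grad, bits) + (bits,)

# Toy gradient: its norm exceeds the threshold, so it gets the high bit-width.
g = np.array([0.5, -1.2, 0.05, 0.9])
q, scale, levels, bits = adaptive_quantize(g)
g_hat = dequantize(q, scale, levels)
```

In a federated round, each UAV would transmit `(q, scale, bits)` instead of the full-precision gradient; the receiver dequantizes before aggregation.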


Supplemental material for "What makes graph neural networks miscalibrated?"

Neural Information Processing Systems

Appendix for "What makes graph neural networks miscalibrated?" We report the homophily index proposed by Pei et al.: for a node i, it is the number of i's neighbors that share i's label divided by the total number of i's neighbors. We follow the setting of Shchur et al. Both models consist of 2 layers and the hidden dimension is fixed to 64. Figure 1 illustrates the aforementioned data partition in our experiments.
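The per-node homophily index described above is simple enough to state in code. A minimal sketch, using a plain adjacency-list and label dictionary as toy inputs (the graph below is illustrative, not from the paper):

```python
# Per-node homophily index (Pei et al.): fraction of a node's neighbors
# that share the node's label.

def homophily_index(neighbors, labels):
    """neighbors: dict node -> list of neighbor nodes; labels: dict node -> label."""
    scores = {}
    for i, nbrs in neighbors.items():
        if not nbrs:
            continue  # undefined for isolated nodes
        same = sum(1 for j in nbrs if labels[j] == labels[i])
        scores[i] = same / len(nbrs)
    return scores

# Toy 4-node graph: nodes 0, 1 carry label 'a'; nodes 2, 3 carry label 'b'.
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
labels = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}
print(homophily_index(neighbors, labels))  # {0: 0.5, 1: 1.0, 2: 0.5, 3: 1.0}
```

Averaging these per-node scores gives a graph-level homophily measure.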



Improving Graph Attention Networks with Large Margin-based Constraints

Wang, Guangtao, Ying, Rex, Huang, Jing, Leskovec, Jure

arXiv.org Machine Learning

Graph Attention Networks (GATs) are the state-of-the-art neural architecture for representation learning with graphs. GATs learn attention functions that assign weights to nodes so that different nodes have different influences in the feature aggregation steps. In practice, however, the induced attention functions are prone to over-fitting due to the increasing number of parameters and the lack of direct supervision on attention weights. GATs also suffer from over-smoothing at the decision boundary of nodes. Here we propose a framework to address these weaknesses via margin-based constraints on attention during training. We first theoretically demonstrate the over-smoothing behavior of GATs and then develop an approach that constrains the attention weights according to the class boundary and feature aggregation pattern. Furthermore, to alleviate the over-fitting problem, we propose additional constraints on the graph structure. Extensive experiments and ablation studies on common benchmark datasets demonstrate the effectiveness of our method, which leads to significant improvements over the previous state-of-the-art graph attention methods on all datasets. Introduction: Many real-world applications involve graph data, like social networks (Zhang and Chen 2018), chemical molecules (Gilmer et al. 2017), and recommender systems (Berg, Kipf, and Welling 2017). The complicated structures of these graphs have inspired new machine learning methods (Cai, Zheng, and Chang 2018; Wu et al. 2019b). Recently much attention and progress has been made on graph neural networks, which have been successfully applied to social network analysis (Battaglia et al. 2016), recommendation systems (Ying et al. 2018), and machine reading comprehension (Tu et al. 2019; De Cao, Aziz, and Titov 2018). Recently, a novel architecture leveraging the attention mechanism in Graph Neural Networks (GNNs), called Graph Attention Networks (GATs), was introduced (Veličković et al. 2017). GAT was motivated by the attention mechanism in natural language processing (Vaswani et al. 2017; Devlin et al. 2018).
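The attention function this abstract discusses computes, for each edge, a logit from the transformed features of both endpoints and normalizes it over the neighborhood. A minimal single-head NumPy sketch in the style of Veličković et al. (2017); shapes and random initialization are illustrative only, not a trained model:

```python
import numpy as np

# Single-head GAT attention sketch: e_ij = LeakyReLU(a^T [W h_i || W h_j]),
# softmax-normalized over each node's neighborhood (self-loops included).
rng = np.random.default_rng(0)
n, f_in, f_out = 4, 3, 2
H = rng.normal(size=(n, f_in))          # node features
W = rng.normal(size=(f_in, f_out))      # shared linear transform
a = rng.normal(size=(2 * f_out,))       # attention vector

adj = np.array([[1, 1, 0, 0],           # adjacency with self-loops (path graph)
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=bool)

Z = H @ W                               # transformed features, shape (n, f_out)
logits = np.array([[a @ np.concatenate([Z[i], Z[j]]) for j in range(n)]
                   for i in range(n)])
logits = np.where(logits > 0, logits, 0.2 * logits)   # LeakyReLU
logits = np.where(adj, logits, -np.inf)               # mask non-edges
alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)             # rows sum to 1
H_out = alpha @ Z                                     # attention-weighted aggregation
```

The margin-based constraints the paper proposes would act on the entries of `alpha`, pushing attention mass toward same-class neighbors; this sketch shows only the unconstrained base mechanism.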


Context-Aware Graph Attention Networks

Jiang, Bo, Wang, Leiling, Tang, Jin, Luo, Bin

arXiv.org Machine Learning

Graph Neural Networks (GNNs) have been widely studied for graph data representation and learning. However, existing GNNs generally conduct context-aware learning only on node feature representations, ignoring the learning of edge (weight) representations. In this paper, we propose a novel unified GNN model, named Context-aware Adaptive Graph Attention Network (CaGAT). CaGAT aims to learn a context-aware attention representation for each graph edge by further exploiting the context relationships among different edges. In particular, CaGAT conducts context-aware learning on both node feature representations and edge (weight) representations simultaneously and cooperatively in a unified manner, which can boost their respective performance in network training. We apply CaGAT to semi-supervised learning tasks. Promising experimental results on several benchmark datasets demonstrate the effectiveness and benefits of CaGAT.